70 research outputs found

    Estimating risk when zero events have been observed

    Assessing the risk of complications or adverse events following an intervention presents challenges when they have not yet occurred. Suppose, for instance, a chronic shortage of cardiac telemetry beds has prompted a hospital to implement a new policy that places low-risk patients admitted to ‘rule out myocardial infarction’ in regular ward beds (ie, with no telemetry). After 6 months and the admission of 100 such patients, no cardiac arrests or other untoward events have occurred. This absence of harm (ie, zero adverse events) indicates a low risk, but clearly we cannot infer a risk of zero on the basis of only 100 patients. But what can we say about the true underlying risk?
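A standard back-of-the-envelope answer to this zero-event question is the "rule of three": with zero events in n trials, an approximate 95% upper confidence bound on the true risk is 3/n, which follows from the exact binomial calculation. A minimal sketch in Python (the 100-patient figure mirrors the example above; the code itself is an illustration, not part of the original article):

```python
def exact_upper_bound(n, confidence=0.95):
    """Exact one-sided upper bound on risk p when 0 events are seen in n trials.
    With zero events, P(no events) = (1 - p)^n, so the bound solves
    (1 - p)^n = 1 - confidence, giving p = 1 - (1 - confidence)**(1/n)."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)

def rule_of_three(n):
    """Classic approximation to the exact 95% bound: 3/n."""
    return 3.0 / n

n = 100  # patients observed with zero adverse events
print(f"exact 95% upper bound: {exact_upper_bound(n):.4f}")  # ~0.0295
print(f"rule of three:         {rule_of_three(n):.4f}")      # 0.0300
```

So after 100 uneventful admissions, the data are still consistent with a true event risk as high as roughly 3%.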

    Historical Exploration - Learning Lessons from the Past to Inform the Future

    This report examines a number of exploration campaigns that have taken place during the last 700 years, and considers them from a risk perspective. The explorations are those led by Christopher Columbus, Sir Walter Raleigh, John Franklin, Sir Ernest Shackleton, the Company of Scotland to Darien and the Apollo project undertaken by NASA. To provide a wider context for investigating the selected exploration campaigns, we seek ways of finding analogies at mission, programmatic and strategic levels and thereby develop common themes. Ultimately, the purpose of the study is to understand how risk has shaped past explorations, in order to learn lessons for the future. From this, we begin to identify and develop tools for assessing strategic risk in future explorations. Figure 0.1 (see Page 6) summarizes the key inputs used to shape the study, the process and the results, and provides a graphical overview of the methodology used in the project. The first step was to identify the potential cases that could be assessed and to create criteria for selection. These criteria were collaboratively developed through discussion with a Business Historian. From this, six cases were identified as meeting our key criteria. Preliminary analysis of two of the cases allowed us to develop an evaluation framework that was used across all six cases to ensure consistency. This framework was revised and developed further as all six cases were analyzed. A narrative and summary statistics were created for each exploration case studied, in addition to a method for visualizing the important dimensions that capture major events. These Risk Experience Diagrams illustrate how the realizations of events, linked to different types of risks, have influenced the historical development of each exploration campaign. From these diagrams, we can begin to compare risks across each of the cases using a common framework.
In addition, exploration risks were classified in terms of mission, program and strategic risks. From this, a Venn diagram and Belief Network were developed to identify how different exploration risks interacted. These diagrams allow us to quickly view the key risk drivers and their interactions in each of the historical cases. By looking at the context in which individual missions take place we have been able to observe the dynamics within an exploration campaign, and gain an understanding of how these interact with influences from stakeholders and competitors. A qualitative model has been created to capture how these factors interact, and are further challenged by unwanted events such as mission failures and competitor successes. This Dynamic Systemic Risk Model is generic and applies broadly to all the exploration ventures studied. The model combines a System Dynamics model, which captures the natural feedback loops within each exploration mission, with a risk model, so that unforeseen events can be incorporated into the modeling. Finally, an overview is given of the motivational drivers and summaries are presented of the overall costs borne in each exploration venture. An important observation is that all the cases, with the exception of Apollo, were failures in terms of meeting their original objectives. However, despite this, several were strategic successes and indeed changed goals as needed in an entrepreneurial way. The Risk Experience Diagrams developed for each case were used to quantitatively assess which risks were realized most often during our case studies and to draw comparisons at mission, program and strategic levels. In addition, using the Risk Experience Diagrams and the narrative of each case, specific lessons for future exploration were identified. There are three key conclusions to this study: Analyses of historical cases have shown that there exists a set of generic risk classes.
This set of risk classes covers mission, program and strategic levels, and includes all the risks encountered in the cases studied. At mission level these are Leadership Decisions, Internal Events and External Events; at program level these are Lack of Learning, Resourcing and Mission Failure; at strategic level they are Programmatic Failure, Stakeholder Perception and Goal Change. In addition there are two further risks that impact at all levels: Self-Interest of Actors, and False Model. There is no reason to believe that these risk classes will not be applicable to future exploration and colonization campaigns. We have deliberately selected a range of different exploration and colonization campaigns, taking place between the 15th Century and the 20th Century. The generic risk framework is able to describe the significant types of risk for these missions. Furthermore, many of these risks relate to how human beings interact and learn lessons to guide their future behavior. Although we are better schooled than our forebears and are technically further advanced, there is no reason to think we are fundamentally better at identifying, prioritizing and controlling these classes of risk. Modern risk modeling techniques are capable of addressing mission and program risk but are not as well suited to strategic risk. We have observed that strategic risks are prevalent throughout historic exploration and colonization campaigns. However, systematic approaches do not exist at the moment to analyze such risks. A risk-informed approach to understanding what happened in the past helps us guard against the danger of assuming that those events were inevitable, and highlights those chance events that produced the history that the world experienced. In turn, it allows us to learn more clearly from the past about the way our modern risk modeling techniques might help us to manage the future, and also bring to light those areas where they may not. This study has been retrospective.
Based on this analysis, the potential for developing the work in a prospective way by applying the risk models to future campaigns is discussed. Follow-on work from this study will focus on creating a portfolio of tools for assessing strategic and programmatic risk.

    Identification and characterisation of telomere regulatory and signalling pathways after induction of telomere dysfunction

    Telomeres are DNA-protein complexes which cap the ends of eukaryotic linear chromosomes. In normal somatic cells telomeres shorten and become dysfunctional during ageing due to the DNA end replication problem. This leads to activation of signalling pathways that result in cellular senescence and apoptosis. However, cancer cells typically bypass this barrier to immortalisation in order to proliferate indefinitely. Therefore, enhancing our understanding of telomere dysfunction and the pathways involved in regulation of the process is essential. However, the pathways involved are highly complex and involve interaction between a wide range of biological processes. Understanding how telomere dysfunction is regulated is therefore a challenging task and requires a systems biology approach. In this study I have developed a novel methodology for visualisation and analysis of gene lists focusing on the network level rather than individual or small lists of genes. Application of this methodology to an expression data set and a gene methylation data set allowed me to enhance my understanding of the biology underlying a senescence-inducing drug and the process of immortalisation, respectively. I then used the methodology to compare the effect of genetic background on induction of telomere uncapping. Telomere uncapping was induced in HCT116 WT, p21-/- and p53-/- cells using a viral vector expressing a mutant variant of hTR, the telomerase RNA template. p21-/- cells showed enhanced sensitivity to telomere uncapping. Analysis of a candidate pathway, Mismatch Repair, revealed a role for the process in the response to telomere uncapping and showed that induction of the pathway was p21 dependent. The methodology was then applied to analysis of the telomerase inhibitor GRN163L and the synergistic effects of hypoglycaemia with this drug. HCT116 cells were resistant to GRN163L treatment. However, under hypoglycaemic conditions the dose required for ablation of telomerase activity was reduced significantly and telomere shortening was enhanced. Overall, this new methodology has allowed our group and collaborators to identify new biology and improve our understanding of processes regulating telomere dysfunction.

    Saddlepoint approximation for the generalized inverse Gaussian Lévy process

    The generalized inverse Gaussian (GIG) Lévy process is a limit of compound Poisson processes, including the stationary gamma process and the stationary inverse Gaussian process as special cases. However, fitting the GIG Lévy process to data is computationally intractable because the marginal distribution of the GIG Lévy process is not convolution-closed. The current work reveals that the marginal distribution of the GIG Lévy process admits a simple yet extremely accurate saddlepoint approximation. In particular, we prove that if the order parameter of the GIG distribution is greater than or equal to −1, the marginal distribution can be approximated accurately with no need to normalize the saddlepoint density. Accordingly, maximum likelihood estimation is simple and quick, random number generation from the marginal distribution is straightforward using Monte Carlo methods, and goodness-of-fit testing is undemanding to perform. Therefore, major numerical impediments to the application of the GIG Lévy process are removed. We demonstrate the accuracy of the saddlepoint approximation via various experimental setups.
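The saddlepoint recipe behind this result can be illustrated on the stationary gamma special case, where everything has a closed form: for the cumulant generating function K(t) = -a*log(1 - t) (rate 1), one solves K'(t_hat) = x and evaluates exp(K(t_hat) - t_hat*x) / sqrt(2*pi*K''(t_hat)). A sketch in plain Python (the full GIG case needs modified Bessel functions and is not attempted here; this only demonstrates the mechanics and the accuracy of the unnormalized density):

```python
import math

def gamma_saddlepoint_pdf(x, alpha):
    """Unnormalized saddlepoint approximation to a Gamma(alpha, rate=1) density.
    CGF: K(t) = -alpha*log(1 - t); K'(t) = alpha/(1 - t); K''(t) = alpha/(1 - t)**2.
    Solving K'(t_hat) = x gives t_hat = 1 - alpha/x."""
    t_hat = 1.0 - alpha / x
    K = -alpha * math.log(1.0 - t_hat)
    K2 = alpha / (1.0 - t_hat) ** 2
    return math.exp(K - t_hat * x) / math.sqrt(2.0 * math.pi * K2)

def gamma_pdf(x, alpha):
    """Exact Gamma(alpha, rate=1) density for comparison."""
    return x ** (alpha - 1.0) * math.exp(-x) / math.gamma(alpha)

x, alpha = 4.0, 5.0
approx, exact = gamma_saddlepoint_pdf(x, alpha), gamma_pdf(x, alpha)
print(approx, exact, approx / exact)  # ratio ~1.017, i.e. within ~2% unnormalized
```

For the gamma case the approximation amounts to replacing the gamma function by its Stirling approximation, which is why it stays accurate without renormalization.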

    Immortalization of T-cells is accompanied by gradual changes in CpG methylation resulting in a profile resembling a subset of T-cell leukemias

    We have previously described gene expression changes during spontaneous immortalization of T-cells, thereby identifying cellular processes important for cell growth crisis escape and unlimited proliferation. Here, we analyze the same model to investigate the role of genome-wide methylation in the immortalization process at different time points pre-crisis and post-crisis using high-resolution arrays. We show that over time in culture there is an overall accumulation of methylation alterations, with preferential increased methylation close to transcription start sites (TSSs), islands, and shore regions. Methylation and gene expression alterations did not correlate for the majority of genes, but for the fraction that correlated, gain of methylation close to the TSS was associated with decreased gene expression. Interestingly, the pattern of CpG site methylation observed in immortal T-cell cultures was similar to that of clinical T-cell acute lymphoblastic leukemia (T-ALL) samples classified as CpG island methylator phenotype positive. These sites were highly overrepresented by polycomb target genes and involved in developmental, cell adhesion, and cell signaling processes. The presence of non-random methylation events in in vitro immortalized T-cell cultures and diagnostic T-ALL samples indicates altered methylation of CpG sites with a possible role in malignant hematopoiesis.

    Identification of a selective G1-phase benzimidazolone inhibitor by a senescence-targeted virtual screen using artificial neural networks

    Cellular senescence is a barrier to tumorigenesis in normal cells, and tumour cells undergo senescence responses to genotoxic stimuli, which is a potential target phenotype for cancer therapy. However, in this setting, mixed-mode responses are common, with apoptosis the dominant effect. Hence, more selective senescence inducers are required. Here we report a machine learning-based in silico screen to identify potential senescence agonists. We built profiles of differentially affected biological process networks from expression data obtained under induced telomere dysfunction conditions in colorectal cancer cells and matched these to a panel of 17 protein targets with confirmatory screening data in PubChem. We trained a neural network using 3517 compounds identified as active or inactive against these targets. The resulting classification model was used to screen a virtual library of ~2M lead-like compounds, from which 147 virtual hits were acquired for validation in growth inhibition and senescence-associated β-galactosidase (SA-β-gal) assays. Among the hits found, a benzimidazolone compound, CB-20903630, had a low micromolar IC50 for growth inhibition of HCT116 cells and selectively induced SA-β-gal activity in the entire treated cell population without cytotoxicity or apoptosis induction. Growth suppression was mediated by G1 blockade involving increased p21 expression and suppressed cyclin B1, CDK1 and CDC25C. Additionally, the compound inhibited growth of multicellular spheroids and caused severe retardation of population kinetics in long-term treatments. Preliminary structure-activity and structure-clustering analyses are reported, and expression analysis of CB-20903630 against other cell cycle suppressor compounds suggested a PI3K/AKT-inhibitor-like profile in normal cells, with different pathways affected in cancer cells.
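The train-then-rank workflow described above (fit a classifier on labelled compounds, score a large virtual library, carry the top-ranked hits forward to assays) can be sketched with synthetic data. Everything below is invented for illustration: the fingerprint size, library size, and the plain logistic model are far simpler than the paper's neural network, and only the 147-hit cutoff echoes the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: binary "fingerprints" whose activity label depends
# on a hidden weight vector (the paper uses real PubChem bioactivity data).
n_train, n_library, n_bits = 500, 2000, 64
true_w = rng.normal(size=n_bits)
X_train = rng.integers(0, 2, size=(n_train, n_bits)).astype(float)
y_train = (X_train @ true_w > 0).astype(float)  # 1 = "active"

# Train a logistic-regression activity model by plain gradient descent.
w = np.zeros(n_bits)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X_train @ w)))
    w -= 0.1 * X_train.T @ (p - y_train) / n_train

# Score the virtual library and keep the top-scoring "hits" for follow-up.
X_library = rng.integers(0, 2, size=(n_library, n_bits)).astype(float)
scores = 1.0 / (1.0 + np.exp(-(X_library @ w)))
top_hits = np.argsort(scores)[::-1][:147]  # mirrors the paper's 147 hits
print(scores[top_hits[:5]])
```

The point of the sketch is the pipeline shape, not the model: any classifier that outputs a ranking score can slot into the same screen-then-validate loop.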

    Analysis of the 5'UTR of HCV genotype 3 grown in vitro in human B cells, T cells, and macrophages

    Background: Previously, we have reported the isolation and molecular characterization of human Hepatitis C virus genotype 1 (HCV-1) from infected patients. We now report an analysis of HCV obtained from patients infected with HCV genotype 3 (HCV-3) as diagnosed by clinical laboratories.
    Results: HCV was cultured in vitro using our system. HCV RNA was isolated from patients' blood and from HCV cultured in various cell types for up to three months. The 5'UTRs of these isolates were used for comparisons. Results revealed a number of sequence changes as compared to the serum RNA. The HCV RNA produced efficiently by infected macrophages, B-cells, and T-cells had sequences similar to HCV-1, which suggests that selection of the variants occurred at the level of macrophages. Virus with sequences similar to HCV-1 replicated better in macrophages than HCV having a 5'UTR similar to HCV-3.
    Conclusions: Although HCV-3 replicates in cell types such as B-cells, T-cells, and macrophages, it may require a different primary cell type for the same purpose. Therefore, in our opinion, HCV-3 does not replicate efficiently in macrophages, and patients infected with HCV-3 may contain a population of HCV-1 in their blood.

    Why people skip music? On predicting music skips using deep reinforcement learning

    Music recommender systems are an integral part of our daily life. Recent research has seen a significant effort around black-box recommender approaches such as Deep Reinforcement Learning (DRL). These advances, together with increasing concerns around users' data collection and privacy, have led to a strong interest in building responsible recommender systems. A key element of a successful music recommender system is modelling how users interact with streamed content. By first understanding these interactions, insights can be drawn to enable the construction of more transparent and responsible systems. An example of these interactions is skipping behaviour, a signal that can measure users' satisfaction, dissatisfaction, or lack of interest. In this paper, we study the utility of users' historical data for the task of sequentially predicting users' skipping behaviour. To this end, we adapt DRL for this classification task, followed by a post-hoc explainability (SHAP) and ablation analysis of the input state representation. Experimental results from a real-world music streaming dataset (Spotify) demonstrate the effectiveness of our approach in this task by outperforming state-of-the-art models. A comprehensive analysis of our approach and of users' historical data reveals a temporal data leakage problem in the dataset. Our findings indicate that, overall, users' behaviour features are the most discriminative in how our proposed DRL model predicts music skips. Content and contextual features have a lesser effect. This suggests that a limited amount of user data should be collected and leveraged to predict skipping behaviour.
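A feature-group ablation of the kind described, dropping one group of state features at a time and measuring the accuracy loss, can be sketched with a simple linear probe on synthetic data. The three group names, their example features, and the relative signal strengths below are all invented for illustration; the paper's model is a DRL agent, not a linear probe:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Synthetic listening events with three feature groups. In this toy setup the
# skip label is driven mostly by the behaviour group, faintly by the others.
behaviour = rng.normal(size=(n, 3))  # e.g. recent skip ratio, session length
content = rng.normal(size=(n, 3))    # e.g. acousticness, tempo
context = rng.normal(size=(n, 3))    # e.g. hour of day, device type
noise = rng.normal(scale=0.5, size=n)
skip = (2.0 * behaviour[:, 0] + 0.3 * content[:, 0] + 0.1 * context[:, 0] + noise > 0)

def fit_and_score(X, y):
    """Least-squares linear probe; returns in-sample classification accuracy."""
    w, *_ = np.linalg.lstsq(X, np.where(y, 1.0, -1.0), rcond=None)
    return float(np.mean((X @ w > 0) == y))

acc_full = fit_and_score(np.hstack([behaviour, content, context]), skip)
acc_no_behaviour = fit_and_score(np.hstack([content, context]), skip)
acc_no_content = fit_and_score(np.hstack([behaviour, context]), skip)
print(acc_full, acc_no_behaviour, acc_no_content)
```

Removing the behaviour group collapses accuracy toward chance while removing content barely matters, which is the shape of evidence behind a "behaviour features are the most discriminative" conclusion.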

    Discovery of significant variants containing large deletions in the 5'UTR of human hepatitis C virus (HCV)

    We recently reported the isolation and in vitro replication of hepatitis C virus. These isolates were termed CIMM-HCV and analyzed to establish genotypes and subtypes, which are reported elsewhere. During this analysis, an HCV isolated from a patient was discovered that had large deletions in the 5'UTR. Fifty-seven percent of the HCV RNA found in this patient's sera had 113 or 116 bp deletions. Sequence data showed that domains IIIa to IIIc were missing. Previous studies have suggested that these domains may be important for translation. In vitro replicated HCV from this patient did not contain these deletions; however, it contained a 148 bp deletion in the 5'UTR. Whereas the patient HCV lacked domains IIIa through IIIc, the isolate lacked domains IIIa through IIId. HCV from this patient continues to produce large deletions in vitro, suggesting that the deleted region may not be important for the assembly or replication of the virus. This is the first report describing these large deletions.